In this blog, we’ll walk through how to test criterion validity, why it matters, and practical steps you can follow. Whether you’re working on a PhD dissertation, a master’s thesis, or any quantitative study, this is a valuable guide. 📚
Two types of criterion validity:
- Concurrent validity: the measure is compared with the criterion (an established standard) at the same point in time.
- Predictive validity: the measure predicts a criterion that will be observed in the future.
🧮 In formula form, the criterion validity coefficient is simply the correlation between scores on the new measure (X) and scores on the criterion (Y):

r_XY = cov(X, Y) / (SD_X × SD_Y)

The larger r_XY, the stronger the evidence of criterion validity.

Why criterion validity matters:
- With high criterion validity, findings are more credible (especially for peer-reviewed or doctoral research).
- It allows stakeholders (e.g., supervisors, institutions, policy makers) to trust the measurement instrument.
- It strengthens conclusions drawn from the data rather than relying solely on theoretical or content validity.
First, choose an appropriate criterion. This could be an established instrument (for concurrent validity) or a future outcome (for predictive validity). For example, for a leadership behavior scale, the criterion might be an existing validated leadership measure (concurrent) or future job performance (predictive).
Next, collect the data:
- Concurrent: administer the new instrument and the criterion instrument to the same sample at the same time.
- Predictive: administer the new instrument now, then measure the criterion at a future point (e.g., six months later).
Then analyse the relationship: use Pearson’s r (if both variables are continuous and approximately normally distributed) to test the association between the new measure and the criterion.
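If you work in Python, this correlation can be computed in a few lines with SciPy; the scores below are made-up illustrations, not data from any real study:

```python
# Minimal sketch: correlating a new measure with a criterion measure.
# The scores are hypothetical and only illustrate the calculation.
from scipy.stats import pearsonr, spearmanr

new_measure = [3.2, 4.1, 2.8, 3.9, 4.5, 3.0, 3.7, 4.2]  # scores on your new instrument
criterion = [2.9, 4.3, 3.1, 3.8, 4.6, 2.7, 3.5, 4.0]     # scores on the criterion measure

r, p = pearsonr(new_measure, criterion)  # Pearson's r and its two-tailed p-value
print(f"Pearson r = {r:.2f}, p = {p:.3f}")

# If the variables are ordinal or clearly non-normal, Spearman's rho is a common alternative.
rho, p_rho = spearmanr(new_measure, criterion)
print(f"Spearman rho = {rho:.2f}, p = {p_rho:.3f}")
```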
Finally, be transparent in your write-up about:
- The criterion used and why
- The sample size
- The correlation value and significance (p-value)
- Any limitations (e.g., time lag, sample characteristics)
Worked example: suppose you are developing a scale measuring entrepreneurial orientation in the context of the Nepalese economy and want to test its predictive criterion validity against a known indicator of firm performance.
- Criterion: Firm performance index measured one year later.
- Your measure: Entrepreneurial orientation score at Time 1.
- Administer the scale to a sample of firms at Time 1.
- Compute Pearson’s r between the Time 1 entrepreneurial orientation scores and the Time 2 firm performance index (a minimal sketch of this step follows below).
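Here is how that computation might look in Python with pandas and SciPy. The firm data and column names (eo_time1, performance_time2) are hypothetical, so the printed value will differ from the illustrative 0.52 interpreted next:

```python
# Minimal sketch of the worked example with made-up Time 1 / Time 2 data.
import pandas as pd
from scipy.stats import pearsonr

df = pd.DataFrame({
    "firm_id": [1, 2, 3, 4, 5, 6],
    "eo_time1": [3.4, 4.2, 2.9, 3.8, 4.6, 3.1],       # entrepreneurial orientation at Time 1
    "performance_time2": [55, 72, 48, 61, 80, None],  # firm performance index one year later
})

# Firms lost to follow-up have missing Time 2 scores; drop them before correlating.
complete = df.dropna(subset=["eo_time1", "performance_time2"])
r, p = pearsonr(complete["eo_time1"], complete["performance_time2"])

print(f"n = {len(complete)}, r = {r:.2f}, p = {p:.3f}")
```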
Interpretation: a moderate positive correlation (r = 0.52) indicates fair predictive validity. In your write-up, this might read:
“The correlation between the entrepreneurial orientation scale and firm performance one year later was r = 0.52, indicating moderate predictive criterion validity.”
- Ensure your criterion measure is valid and reliable; testing criterion validity requires a strong benchmark.
- Use appropriately sized samples; small samples can lead to unstable correlation coefficients.
- In the case of predictive validity, ensure the time lag is appropriate (not so short that the outcome hasn’t had time to occur, and not so long that you lose participants to attrition).
- Report whether the correlation is statistically significant and give the effect size (and ideally a confidence interval) so readers understand practical importance, not just significance; see the sketch after this list.
- Always discuss limitations: e.g., sample bias, criterion measure issues, measurement error.
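If you want to report a 95% confidence interval around r, a common approach is the Fisher z transformation. Below is a minimal sketch; the observed r of 0.52 and sample size of 120 are placeholder values, not results from the example above:

```python
# Minimal sketch: 95% confidence interval for a correlation via the Fisher z transformation.
import math

r, n = 0.52, 120           # observed correlation and sample size (illustrative placeholders)
z = math.atanh(r)          # Fisher z transformation of r
se = 1 / math.sqrt(n - 3)  # standard error of z
lo, hi = math.tanh(z - 1.96 * se), math.tanh(z + 1.96 * se)

print(f"r = {r:.2f}, 95% CI [{lo:.2f}, {hi:.2f}], n = {n}")
```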
Testing criterion validity strengthens the rigor of your research instruments. Whether you’re working on a scale, survey, or measurement tool, following the steps above will help ensure you report a robust validity assessment, an essential element in dissertations and publications.